MG205: Econometrics Theory and Applications

Empirical Exercise 6: Discrimination

Jose Ignacio Gonzalez-Rojas

London School of Economics and Political Science

February 16, 2026

Today: From Panel Data to Experiments

Bertrand & Mullainathan (2004)

Last Week

  • First differencing eliminates time-invariant unobservables
  • Event studies with dynamic treatment effects
  • Clustering standard errors for panel data

Today: what if the treatment itself is time-invariant?

Observational Data Cannot Identify Discrimination

Persistent Racial Gaps Demand an Explanation

Labour Market Facts

The Facts

  • African-Americans are twice as likely to be unemployed as Whites
  • African-Americans earn 25% less when employed
  • These gaps have persisted for decades

The Debate

Three competing views:

  • Employers exhibit racial bias
  • Market forces eliminate discrimination
  • Data limitations prevent causal inference

The debate hinges on identification

Unobservable Skills Prevent Causal Inference

The OVB Problem

\[\begin{align*} \log(\text{wage}_{it}) = \beta_{0} &+ \beta_{1}\mathbb{1}[\text{race}_{i} = \text{African American}] \\ &+ \beta_{2}\text{education}_{it} + \beta_{3}\text{experience}_{it} + e_{it} \end{align*}\]

  • Employers observe more than researchers
    • Unobservable skills are likely correlated with race
  • Selection into employment biases wage estimates
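The omitted variable bias can be seen in a toy simulation (all numbers hypothetical, not from the paper): the true race coefficient on log wages is set to zero, but an unobserved skill variable is correlated with the group indicator, so a regression that omits skill attributes the skill gap to race.

```python
import random
import statistics

random.seed(0)
n = 100_000

# Hypothetical data: the TRUE effect of the group indicator on log wage is zero,
# but unobserved skill is correlated with the indicator.
group = [random.random() < 0.5 for _ in range(n)]                # 1[race_i]
skill = [random.gauss(-0.5 if g else 0.0, 1.0) for g in group]   # unobserved by the researcher
logwage = [s + random.gauss(0.0, 1.0) for s in skill]            # beta_race = 0

# With a single binary regressor, the OLS slope is the difference in group means.
gap = (statistics.mean(w for w, g in zip(logwage, group) if g)
       - statistics.mean(w for w, g in zip(logwage, group) if not g))
print(round(gap, 2))  # a sizeable negative "race effect" that is pure OVB
```

The naive regression recovers roughly the skill gap (about -0.5 here), not the true zero effect: exactly the bias the equation above cannot rule out.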

Panel Methods Cannot Identify Racial Discrimination

The Treatment Is Time-Invariant

\[\Delta\log(\text{wage}_{it}) = \beta_{1}\underbrace{\Delta\mathbb{1}[\text{race}_{i}]}_{= \, 0} + \beta_{2}\Delta\text{education}_{it} + \beta_{3}\Delta\text{experience}_{it} + \Delta e_{it}\]

  • Race does not change over time
    • First differencing eliminates the variable of interest
  • FE and FD cannot identify time-invariant treatments
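The mechanical problem is easy to verify: differencing a regressor that never changes leaves no variation at all. A minimal sketch with made-up panel data:

```python
# Hypothetical panel for one individual observed in t = 1, 2, 3:
# the race indicator is constant, so its first difference is identically zero.
race = [1, 1, 1]                                        # 1[race_i] in each period
d_race = [race[t] - race[t - 1] for t in range(1, len(race))]
print(d_race)  # [0, 0] -- no variation left, so beta_1 is not identified
```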

Experiments Can Identify Discrimination

Bertrand and Mullainathan Designed a Correspondence Study

Bertrand & Mullainathan (2004)

Design

  • Send fictitious resumes to real job postings
  • Randomly assign racial signals through names
  • Track callback rates by perceived race
  • Four resumes per advertisement

Sample

  • ~1,300 job advertisements
  • ~5,000 resumes sent
  • Boston Globe and Chicago Tribune
  • July 2001 – May 2002

Names Signal Perceived Race

From Massachusetts Birth Certificates

White Names

  • Female: Emily, Anne, Allison, Sarah
  • Male: Brad, Greg, Geoffrey, Brendan

African-American Names

  • Female: Lakisha, Tanisha, Keisha, Aisha
  • Male: Jamal, Jermaine, Kareem, Darnell

Four resumes per ad: one high + one low quality per racial group

Resume Quality Is Experimentally Varied

High versus Low Quality Dimensions

Dimension        High Quality                  Low Quality
Experience       More years                    Fewer years
Skills           Computer skills               Basic skills only
Certifications   External certifications       None
Extras           Volunteering, email address   Employment gaps

Randomisation Breaks Omitted Variable Bias

The Identification Strategy

\[\mathbb{E}\left[e_{it} \mid \mathbb{1}[\text{race}_{i}]\right] = 0\]

Names are randomly assigned — all characteristics balanced. Simple regression gives a causal estimate. No controls needed.
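A hedged simulation of this logic (hypothetical numbers, not the paper's data): when the name is assigned independently of every resume characteristic, the simple difference in means is unbiased, here recovering a true effect of zero even though callbacks depend on skill.

```python
import random
import statistics

random.seed(2)
n = 100_000

# Hypothetical resumes: callback propensity depends on skill only (true race effect = 0),
# and the racial signal is assigned independently of skill, as in a correspondence study.
skill = [random.gauss(0.0, 1.0) for _ in range(n)]
name = [random.random() < 0.5 for _ in range(n)]          # randomised, independent of skill
propensity = [0.10 + 0.02 * s for s in skill]

gap = (statistics.mean(p for p, g in zip(propensity, name) if g)
       - statistics.mean(p for p, g in zip(propensity, name) if not g))
print(round(gap, 3))  # approximately zero: randomisation removes omitted variable bias
```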

Balance Tests Confirm Randomisation Worked

One rejection out of nine tests at 5% significance

Variable               White mean   African-American mean   p-value
Years of experience       7.86             7.83              0.85
University education      0.72             0.72              0.61
Computer skills           0.81             0.83              0.03
Special skills            0.33             0.33              0.83
Volunteer work            0.41             0.41              0.68
Work in school            0.56             0.56              0.84
Employment holes          0.45             0.45              0.77
Military experience       0.09             0.10              0.27
Number of jobs            3.66             3.66              0.86
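Each row of a balance table is just a two-sample t-test of equal means across the name groups. A minimal sketch using made-up years-of-experience values (not the paper's microdata), with a Welch t statistic computed by hand:

```python
import math
import statistics

def welch_t(x, y):
    """Welch two-sample t statistic for a difference in means."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / len(x) + vy / len(y))

# Illustrative (made-up) years of experience by name group
white = [8.0, 7.0, 9.0, 7.5, 8.5]
black = [7.5, 8.0, 7.0, 9.0, 8.0]

t = welch_t(white, black)
print(abs(t) < 1.96)  # True: no rejection at 5%, consistent with balance
```

With nine such tests, one rejection at the 5% level is about what chance alone would produce under successful randomisation.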

White Names Receive 50% More Callbacks

White Applicants Receive Significantly More Callbacks

Callback Rates by Race

The difference is 3.20 percentage points — equivalent to approximately 8 years of additional experience.

The Regression Confirms the Gap

Estimated Model

\[\widehat{\mathbb{1}[\text{callback}]}_{i} = \underset{(0.0055)}{0.0965} - \underset{(0.0078)}{0.0320} \cdot \mathbb{1}[\text{race}_{i} = \text{African American}]\]

The Intercept

\(\hat{\beta}_{0} = 0.0965\) is the White callback rate.

When the African-American indicator equals zero, the predicted callback rate equals the intercept.

The Slope

\(\hat{\beta}_{0} + \hat{\beta}_{1} = 0.0645\) is the African-American callback rate.

The coefficient measures the racial gap: \(-0.0320\) or \(-3.20\) percentage points.

No controls needed — randomisation ensures \(\text{Cov}(\mathbb{1}[\text{race}_{i}], e_{i}) = 0\).

The Regression Coefficient Equals the Difference in Means

A Mechanical Property of OLS

\[\hat{\beta}_{1} = \bar{y}_{\text{African American}} - \bar{y}_{\text{White}} = 0.0645 - 0.0965 = -0.0320\]

  • Single binary regressor → OLS slope = difference in group means
  • The t-test on \(\hat{\beta}_{1}\) is identical to a two-sample t-test
  • Mechanical property of OLS — proved in Topic 2
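This property can be checked numerically. The sketch below rebuilds an illustrative micro dataset whose group counts are chosen to match the published callback rates (2,000 resumes per group is an assumption, not the paper's exact count), then computes the OLS slope as Cov(x, y)/Var(x):

```python
import statistics

# Illustrative counts chosen so group means equal the published rates:
# 193/2000 = 0.0965 (White), 129/2000 = 0.0645 (African-American).
n = 2000
race = [0] * n + [1] * n                                      # 0 = White name, 1 = African-American name
callback = [1] * 193 + [0] * (n - 193) + [1] * 129 + [0] * (n - 129)

# OLS slope with a single regressor: Cov(x, y) / Var(x)
mx, my = statistics.mean(race), statistics.mean(callback)
cov = sum((x - mx) * (y - my) for x, y in zip(race, callback)) / (2 * n)
var = sum((x - mx) ** 2 for x in race) / (2 * n)
beta1 = cov / var

diff_in_means = 129 / n - 193 / n
print(round(beta1, 4), round(diff_in_means, 4))  # both equal -0.032
```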

Observational Studies Cannot Achieve This

Experiments vs Panel Data

                           Observational + Panel                     Experiment
Identification             OVB: skills confound race                 \(\mathbb{E}[e_{it} \mid \text{race}_{i}] = 0\) by design
Time-invariant treatment   FD eliminates the variable of interest    Randomisation sidesteps the problem
Controls                   Never fully resolve confounding           Not needed

Randomisation solves what panel data cannot

Improving Your CV Helps Only If You Are White

Discrimination Interacts with Resume Quality

Callback Rates by Race and Quality

White low-quality applicants (8.50%) receive more callbacks than African-American high-quality applicants (6.70%).

Discrimination Distorts Human Capital Incentives

The Returns Trap

  • White applicants: +27% return to quality (8.50% → 10.79%, p = 0.055)
  • African-American applicants: +8% return to quality (6.19% → 6.70%, p = 0.604)
  • If improving your CV does not improve your chances
    • Rational to underinvest in skills
  • Discrimination turns a fairness problem into an efficiency problem
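The percentage returns quoted above follow directly from the callback rates; a quick arithmetic check (return = high-quality rate divided by low-quality rate, minus one):

```python
# Returns to resume quality implied by the callback rates in the text
white_return = 10.79 / 8.50 - 1   # 8.50% -> 10.79%
black_return = 6.70 / 6.19 - 1    # 6.19% -> 6.70%
print(round(100 * white_return), round(100 * black_return))  # 27 8
```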

The Gap Is Uniform Across the Labour Market

Robustness

The racial gap is consistent across:

  • All occupations (sales, administrative, clerical)
  • All industries
  • All employer sizes (small and large firms)
  • Federal contractors do not discriminate less
  • Equal Opportunity Employers do not discriminate less

“Lexicographic search” — employers may stop reading at African-American names

The Study Has Four Limitations

Scope and External Validity

  1. Callbacks ≠ hiring decisions
    • First hurdle only — but fewer interviews mechanically reduce offers
  2. Names signal perceived race, not race per se
    • No correlation between callbacks and mother’s education within race (Table 8)
  3. Newspaper advertisements are one channel
    • Social networks and online platforms may differ
  4. External validity
    • Boston and Chicago, 2001–2002

Three Lessons from the Discrimination Experiment

  1. Time-invariant treatments cannot be identified with panel methods
    • Race does not change over time — first differencing eliminates the variable of interest
  2. Randomisation solves identification
    • Because names are randomly assigned: \(\mathbb{E}[e_{i} \mid \text{race}_{i}] = 0\)
    • No omitted variable bias, no controls needed
  3. Returns to quality are asymmetric — discrimination distorts incentives
    • White applicants: +27% return to quality (p = 0.055)
    • African-American applicants: +8% return to quality (p = 0.604)

After Reading Week: Differences-in-Differences

Topic 8, Part 3

No class next week — reading week.

After reading week: finishing Topic 8 with differences-in-differences.

  • Combining cross-sectional and time-series variation
  • The parallel trends assumption
  • Two-way fixed effects

References

Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. American Economic Review, 94(4), 991–1013. https://doi.org/10.1257/0002828042002561